The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do
Downloads: 3859
Type: EPUB + TXT + PDF + MOBI
Create Date: 2021-04-24 09:30:58
Update Date: 2025-09-06
Status: finished
Author: Erik J. Larson
ISBN: B08TV31WJ3
Environment: PC / Android / iPhone / iPad / Kindle
Reviews
Ben Chugg:
There is a prevailing dogma that achieving "artificial general intelligence" will require nothing more than bigger and better machine learning models. Add more layers, add more data, create better optimization algorithms, and voila: a system as general-purpose as humans but infinitely superior in its processing speed. Nobody quite knows exactly how this jump from narrow AI (good at a particular, very well-defined task) to general AI will happen, but that hasn't stopped many from building careers on erroneous predictions, or from prophesying that such a development spells the doom of the human race. The AI space is dominated by vague arguments and absolute certainty in the conclusions.

Onto the scene steps Erik Larson, an engineer who understands both how these systems work and their philosophical assumptions. Larson points out that all our machine learning models are built on induction: inferring general patterns from specific observations. We feed an algorithm 10,000 labelled pictures and it infers which relationships among the pixels are most likely to predict "cat". Some models are faster than others, more clever in their pattern recognition, and so on, but at bottom they're all doing the same thing: correlating datasets.

We know of only one system capable of universal intelligence: the human brain. And humans don't learn by induction. We don't infer the general from the specific. Instead, we guess the general and use the specifics to refute our guesses. We use our creativity to conjecture aspects of the world (space-time is curved, Ryan is lying, my shoes are in my backpack), and use empirical observations to disabuse us of those ideas that are false. This is why humans are capable of developing general theories of the world. Induction implies that you can only know what you see (a philosophy called "empiricism") - but that's false: we've never seen the inside of a star, yet we develop theories that explain the phenomena. Charles Sanders Peirce called this method of guessing and checking "abduction," and we have no good theory of abduction. To have one, we would need to better understand human creativity, which plays a central role in knowledge creation. In other words, we need a philosophical and scientific revolution before we can possibly generate true artificial intelligence. As long as we keep relying on induction, machines will be forever constrained by the data they are fed.

Larson argues that the philosophical confusion over induction and the current focus on "big data" is infecting other areas of science. Many neuroscience departments have forgotten the role that theories play in advancing our knowledge, and are hoping that a true understanding of the human brain will be born out of simply mapping it more accurately. But this is hopeless. Even after having developed an accurate map, what will you look for? There is no such thing as observation without theory.

At a time when it's fashionable to point out all the biases and "irrationalities" in human thinking, hopefully the book reminds us of the amazing human ability to create general-purpose knowledge. Highly recommended.
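The reviewer's summary of Larson's induction argument - that a learner merely correlates its training data and so stays bounded by it - can be sketched in a few lines. The toy example below is my own illustration, not code from the book: a one-dimensional "threshold" classifier that induces a general rule from specific labeled examples, then applies that rule mechanically in regions it has never observed.

```python
# Illustrative sketch of induction (reviewer's sense): infer a general
# rule purely from specific labeled observations. Not from the book.

def fit_threshold(examples):
    """Learn a 1-D threshold classifier from (value, label) pairs.

    Induction: generalize a decision boundary from the specific
    examples seen, and nothing else.
    """
    pos = [x for x, y in examples if y == 1]
    neg = [x for x, y in examples if y == 0]
    # Place the boundary midway between the two classes observed so far.
    return (max(neg) + min(pos)) / 2

def predict(threshold, x):
    return 1 if x >= threshold else 0

# The learner only ever sees values between 1 and 9.
train = [(1, 0), (2, 0), (3, 0), (7, 1), (8, 1), (9, 1)]
t = fit_threshold(train)   # boundary at 5.0

print(predict(t, 8))    # in-distribution: 1
print(predict(t, 2))    # in-distribution: 0
# Far outside the data it was fed, the induced rule is extrapolated
# blindly; there is no mechanism for conjecturing that the world might
# behave differently there - the "abduction" step the review describes.
print(predict(t, -50))  # 0, by mechanical extrapolation
```

However sophisticated the pattern-fitting step becomes, the learned rule is a function of the observations alone, which is the constraint the review attributes to all induction-based systems.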